Podcasts have become a massively consumed form of online content, notably owing to the accessibility of production means and to scaled distribution through large streaming platforms. Categorization systems and information access technologies typically use topics as the primary way of organizing or navigating podcast collections. However, annotating podcasts with topics remains highly problematic, because the assigned editorial genres are broad, heterogeneous, or misleading, or because of data challenges (e.g., short metadata text, noisy transcripts). Here, we assess the feasibility of discovering relevant topics from podcast metadata, titles, and descriptions using topic modeling techniques. We also propose a new strategy for leveraging named entities (NEs), often present in podcast metadata, in a non-negative matrix factorization (NMF) topic modeling framework. Our experiments on two existing datasets from Spotify and iTunes, as well as a new dataset from Deezer, a service providing a podcast catalog, show that our proposed document representation, NEiCE, leads to improved topic coherence over baselines. We release our code for reproducibility of the results.
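The core idea of entity-aware NMF topic modeling can be sketched as follows. This is a minimal illustration, not the paper's NEiCE implementation: the documents and entity list are invented, and a plain multiplicative-update NMF stands in for the full framework. Multi-word named entities are merged into single tokens before building the document-term matrix, so the factorization treats each entity as one feature.

```python
import numpy as np

# Toy corpus and entity list (illustrative only, not from the paper).
docs = [
    "true crime stories from new york city police files",
    "new york city history and crime in the nineties",
    "machine learning and deep learning for audio",
    "audio machine learning podcast with deep models",
]
entities = ["new york city", "machine learning", "deep learning"]

def entity_tokenize(text, entities):
    # Merge multi-word named entities into single tokens so the
    # factorization sees them as one feature (the entity-aware idea).
    for e in entities:
        text = text.replace(e, e.replace(" ", "_"))
    return text.split()

tokenized = [entity_tokenize(d, entities) for d in docs]
vocab = sorted({t for doc in tokenized for t in doc})
idx = {t: i for i, t in enumerate(vocab)}
X = np.zeros((len(docs), len(vocab)))
for d, toks in enumerate(tokenized):
    for t in toks:
        X[d, idx[t]] += 1.0

# Plain multiplicative-update NMF: X ~ W H with W, H >= 0.
rng = np.random.default_rng(0)
k = 2
W = rng.random((X.shape[0], k)) + 1e-3
H = rng.random((k, X.shape[1])) + 1e-3
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-9)

for t in range(k):
    top = [vocab[j] for j in np.argsort(H[t])[::-1][:3]]
    print("topic", t, top)
```

With merged tokens, an entity such as `new_york_city` can anchor a topic on its own instead of being split into three uninformative unigrams.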
We show that using nearest neighbours in the latent space of autoencoders (AE) significantly improves the performance of semi-supervised novelty detection in both single-class and multi-class contexts. Autoencoding methods detect novelty by learning to differentiate between the non-novel training class(es) and all other unseen classes. Our method harnesses a combination of the reconstructions of the nearest neighbours and the latent distances to a given input's latent representation. We demonstrate that our nearest-latent-neighbours (NLN) algorithm is memory- and time-efficient, does not require significant data augmentation, and does not rely on pre-trained networks. Furthermore, we show that the NLN algorithm is easily applied to multiple datasets without modification. Additionally, the proposed algorithm is agnostic to the autoencoder architecture and the reconstruction-error method. We validate our approach on several standard datasets for multiple different autoencoding architectures, such as vanilla, adversarial, and variational autoencoders, using reconstruction, residual, or consistency losses. The results show that the NLN algorithm grants up to a 17% increase in area under the receiver operating characteristic (AUROC) curve for the multi-class case and 8% for single-class novelty detection.
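The scoring idea can be sketched in a few lines. This is an assumed toy illustration, not the paper's code: the "autoencoder" is a random linear projection with a pseudo-inverse decoder, purely to show how a latent-neighbour distance and a reconstruction residual combine into one novelty score.

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(200, 8))   # non-novel training data
enc = rng.normal(size=(8, 3))                  # stand-in "encoder" weights
dec = np.linalg.pinv(enc)                      # stand-in "decoder"

def latent(x):
    return x @ enc

def reconstruct(x):
    return latent(x) @ dec

z_train = latent(train)  # latent representations of the training set

def nln_score(x, k=5):
    # Distance from the query's latent code to its k nearest latent
    # neighbours, combined with the query's own reconstruction error.
    z = latent(x)
    d = np.linalg.norm(z_train - z, axis=1)
    nn_dist = np.sort(d)[:k].mean()
    recon_err = np.linalg.norm(reconstruct(x) - x)
    return nn_dist + recon_err

inlier = rng.normal(0.0, 1.0, size=8)   # drawn from the training regime
outlier = rng.normal(6.0, 1.0, size=8)  # far from the training regime
print(nln_score(inlier), nln_score(outlier))
```

A query far from the training distribution lands far from every training latent code, so its score is dominated by the neighbour term even when its reconstruction error alone would be ambiguous.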
Existing approaches to unsupervised object discovery (UOD) do not scale up to large datasets without approximations that compromise their performance. We propose a novel formulation of UOD as a ranking problem, amenable to the arsenal of distributed methods available for eigenvalue problems and link analysis. Through the use of self-supervised features, we also demonstrate the first effective fully unsupervised pipeline for UOD. Extensive experiments on COCO and OpenImages show that, in the single-object discovery setting where a single prominent object is sought in each image, the proposed LOD (Large-scale Object Discovery) approach is on par with or better than the state of the art on medium-scale datasets (up to 120K images), and over 37% better than the only other algorithms capable of scaling up to 1.7M images. In the multi-object discovery setting where multiple objects are sought in each image, the proposed LOD achieves better average precision (AP) than all other methods on datasets ranging from 20K to 1.7M images. Using self-supervised features, we also show that the method obtains state-of-the-art UOD performance on OpenImages. Our code is publicly available at https://github.com/huyvvo/LOD.
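The ranking formulation is what makes eigenvalue and link-analysis machinery applicable. The sketch below is an assumed illustration, not the authors' implementation: region-proposal features are invented, and proposals are ranked by their weight in the principal eigenvector of a non-negative similarity graph, computed by power iteration, the kind of computation that distributes well.

```python
import numpy as np

rng = np.random.default_rng(1)
feats = rng.normal(size=(6, 16))               # stand-in proposal features
feats[0] = feats[1] + 0.05 * rng.normal(size=16)  # two mutually similar proposals
feats /= np.linalg.norm(feats, axis=1, keepdims=True)

# Non-negative similarity graph over proposals (clipped cosine, no self-loops).
S = np.clip(feats @ feats.T, 0.0, None)
np.fill_diagonal(S, 0.0)

# Power iteration for the Perron (principal) eigenvector of S.
v = np.ones(len(S)) / len(S)
for _ in range(100):
    v = S @ v
    v /= np.linalg.norm(v)

ranking = np.argsort(v)[::-1]
print("proposal ranking:", ranking.tolist())
```

Proposals that are strongly similar to other highly ranked proposals accumulate eigenvector mass, so the pair of near-duplicate "salient" proposals rises to the top, the same link-analysis intuition as PageRank-style centrality.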
In this paper we explore the task of modeling (semi) structured object sequences; in particular, we focus our attention on the problem of developing a structure-aware input representation for such sequences. In such sequences, we assume that each structured object is represented by a set of key-value pairs which encode the attributes of the structured object. Given a universe of keys, a sequence of structured objects can then be viewed as an evolution of the values for each key over time. We encode and construct a sequential representation using the values for a particular key (Temporal Value Modeling - TVM) and then self-attend over the set of key-conditioned value sequences to create a representation of the structured object sequence (Key Aggregation - KA). We pre-train and fine-tune the two components independently and present an innovative training schedule that interleaves the training of both modules with shared attention heads. We find that this iterative two-part training results in better performance than a unified network with hierarchical encoding, as well as over other methods that use a {\em record-view} representation of the sequence \cite{de2021transformers4rec} or a simple {\em flattened} representation of the sequence. We conduct experiments using real-world data to demonstrate the advantage of interleaving TVM-KA on multiple tasks, along with detailed ablation studies motivating our modeling choices. We find that our approach performs better than flattening sequence objects and also allows us to operate on significantly larger sequences than existing methods.
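The key-conditioned view of a structured object sequence can be sketched concretely. The records and keys below are invented for illustration; this shows only the data layout that Temporal Value Modeling operates on (one value sequence per key, over time), not the transformer components themselves.

```python
# A structured object sequence: each object is a set of key-value pairs.
sequence = [
    {"event": "login",  "device": "mobile", "status": "ok"},
    {"event": "search", "device": "mobile", "status": "ok"},
    {"event": "buy",    "device": "web",    "status": "fail"},
]

# The universe of keys observed across the sequence.
universe = sorted({k for obj in sequence for k in obj})

def temporal_value_view(seq, keys):
    # One value sequence per key, padded with None where a key is absent:
    # this is the per-key evolution that TVM encodes, and that KA then
    # aggregates across keys into one sequence representation.
    return {k: [obj.get(k) for obj in seq] for k in keys}

views = temporal_value_view(sequence, universe)
print(views["event"])   # ['login', 'search', 'buy']
```

A flattened representation would instead serialize all nine key-value pairs into one long token stream, which is what makes long sequences expensive for the baselines.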
Optical coherence tomography (OCT) captures cross-sectional data and is used for the screening, monitoring, and treatment planning of retinal diseases. Technological developments to increase the speed of acquisition often result in systems with a narrower spectral bandwidth, and hence a lower axial resolution. Traditionally, image-processing-based techniques have been utilized to reconstruct subsampled OCT data and more recently, deep-learning-based methods have been explored. In this study, we simulate reduced axial scan (A-scan) resolution by Gaussian windowing in the spectral domain and investigate the use of a learning-based approach for image feature reconstruction. In anticipation of the reduced resolution that accompanies wide-field OCT systems, we build upon super-resolution techniques to explore methods to better aid clinicians in their decision-making to improve patient outcomes, by reconstructing lost features using a pixel-to-pixel approach with an altered super-resolution generative adversarial network (SRGAN) architecture.
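The Gaussian-windowing simulation step can be sketched as follows. This is an illustrative sketch under assumed parameters (an ideal flat source spectrum and an invented window width), not the study's code: narrowing the spectral bandwidth of an A-scan broadens its axial point-spread function, mimicking a lower-resolution system.

```python
import numpy as np

n = 512
k = np.arange(n)
spectrum = np.ones(n, dtype=complex)     # ideal flat source spectrum
a_scan = np.abs(np.fft.ifft(spectrum))   # sharp axial impulse response

# Gaussian window in the spectral domain (width is an assumed parameter).
sigma = 40.0
window = np.exp(-0.5 * ((k - n / 2) / sigma) ** 2)
a_scan_lowres = np.abs(np.fft.ifft(spectrum * window))

def fwhm(profile):
    # Full width at half maximum of the axial peak, in samples.
    p = np.fft.fftshift(profile)
    return int((p >= p.max() / 2).sum())

print("FWHM:", fwhm(a_scan), "->", fwhm(a_scan_lowres))
```

The widened half-maximum width of the windowed A-scan is precisely the resolution loss that the learning-based reconstruction is then trained to undo.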
Real-life tools for decision-making in many critical domains are based on ranking results. With the increasing awareness of algorithmic fairness, recent works have presented measures for fairness in ranking. Many of those definitions consider the representation of different ``protected groups'', in the top-$k$ ranked items, for any reasonable $k$. Given the protected groups, confirming algorithmic fairness is a simple task. However, the groups' definitions may be unknown in advance. In this paper, we study the problem of detecting groups with biased representation in the top-$k$ ranked items, eliminating the need to pre-define protected groups. The number of such possible groups can be exponential, making the problem hard. We propose efficient search algorithms for two different fairness measures: global representation bounds, and proportional representation. Then we propose a method to explain the bias in the representations of groups utilizing the notion of Shapley values. We conclude with an experimental study, showing the scalability of our approach and demonstrating the usefulness of the proposed algorithms.
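The detection problem can be made concrete with a brute-force sketch. The attribute names, tiny ranking, and tolerance below are invented for illustration, and exhaustive enumeration is exactly what the paper's efficient algorithms avoid; the sketch only shows the proportional-representation criterion being checked over conjunctive groups.

```python
from itertools import combinations

ranked = [  # a toy ranking, best first
    {"gender": "f", "dept": "cs"},
    {"gender": "m", "dept": "cs"},
    {"gender": "m", "dept": "cs"},
    {"gender": "m", "dept": "ee"},
    {"gender": "f", "dept": "ee"},
    {"gender": "f", "dept": "ee"},
]

def groups(items):
    # All conjunctions of one value per attribute, up to two attributes.
    pairs = sorted({(k, v) for it in items for k, v in it.items()})
    for r in (1, 2):
        for combo in combinations(pairs, r):
            if len({k for k, _ in combo}) == r:  # distinct attributes only
                yield dict(combo)

def biased_groups(ranked, k, tol=0.2):
    # Flag groups whose share of the top-k deviates from their share of
    # the whole ranking by more than tol (proportional representation).
    top = ranked[:k]
    flagged = []
    for g in groups(ranked):
        def match(it, g=g):
            return all(it.get(a) == v for a, v in g.items())
        share_all = sum(map(match, ranked)) / len(ranked)
        share_top = sum(map(match, top)) / k
        if abs(share_top - share_all) > tol:
            flagged.append((g, share_top, share_all))
    return flagged

for g, st, sa in biased_groups(ranked, k=3):
    print(g, f"top-k share {st:.2f} vs overall {sa:.2f}")
```

Even on this toy data the group count grows combinatorially with attributes and values, which is why pruning search algorithms, rather than enumeration, are needed at scale.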
The previous fine-grained datasets mainly focus on classification and are often captured in a controlled setup, with the camera focusing on the objects. We introduce the first Fine-Grained Vehicle Detection (FGVD) dataset in the wild, captured from a moving camera mounted on a car. It contains 5502 scene images with 210 unique fine-grained labels of multiple vehicle types organized in a three-level hierarchy. While previous classification datasets also include makes for different kinds of cars, the FGVD dataset introduces new class labels for categorizing two-wheelers, autorickshaws, and trucks. The FGVD dataset is challenging as it has vehicles in complex traffic scenarios with intra-class and inter-class variations in types, scale, pose, occlusion, and lighting conditions. Current object detectors like YOLOv5 and Faster RCNN perform poorly on our dataset due to a lack of hierarchical modeling. Along with providing baseline results for existing object detectors on the FGVD dataset, we also present the results of a combination of an existing detector and the recent Hierarchical Residual Network (HRN) classifier for the FGVD task. Finally, we show that FGVD vehicle images are the most challenging to classify among the fine-grained datasets.
Three main points: 1. Data Science (DS) will be increasingly important to heliophysics; 2. Methods of heliophysics science discovery will continually evolve, requiring the use of learning technologies [e.g., machine learning (ML)] that are applied rigorously and that are capable of supporting discovery; and 3. To grow with the pace of data, technology, and workforce changes, heliophysics requires a new approach to the representation of knowledge.
In the Earth's magnetosphere, there are fewer than a dozen dedicated probes beyond low-Earth orbit making in-situ observations at any given time. As a result, we poorly understand its global structure and evolution, and the mechanisms of its main activity processes: magnetic storms and substorms. New Artificial Intelligence (AI) methods, including machine learning, data mining, and data assimilation, as well as new AI-enabled missions, will need to be developed to meet this Sparse Data challenge.
Dataset scaling, also known as normalization, is an essential preprocessing step in a machine learning pipeline. It is aimed at adjusting attribute scales so that they all vary within the same range. This transformation is known to improve the performance of classification models, but there are several scaling techniques to choose from, and this choice is not generally done carefully. In this paper, we execute a broad experiment comparing the impact of 5 scaling techniques on the performance of 20 classification algorithms among monolithic and ensemble models, applying them to 82 publicly available datasets with varying imbalance ratios. Results show that the choice of scaling technique matters for classification performance, and the performance difference between the best and the worst scaling technique is relevant and statistically significant in most cases. They also indicate that choosing an inadequate technique can be more detrimental to classification performance than not scaling the data at all. We also show how the performance variation of an ensemble model, considering different scaling techniques, tends to be dictated by that of its base model. Finally, we discuss the relationship between a model's sensitivity to the choice of scaling technique and its performance, and provide insights into its applicability in different model deployment scenarios. Full results and source code for the experiments in this paper are available in a GitHub repository.\footnote{https://github.com/amorimlb/scaling\_matters}
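Why the choice of technique matters can be seen on a toy feature with an outlier. This sketch uses invented data, not the paper's benchmark, and implements two common scalers by hand: min-max scaling squeezes most values into a narrow band when an outlier stretches the range, while z-score standardization preserves more of their spread.

```python
import numpy as np

# Two features on very different scales; the second has an outlier.
X = np.array([[1.0,   200.0],
              [2.0,   400.0],
              [3.0,   600.0],
              [4.0, 80000.0]])

def min_max(X):
    # Rescale each attribute to [0, 1] using its observed range.
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo)

def z_score(X):
    # Center each attribute and divide by its standard deviation.
    return (X - X.mean(axis=0)) / X.std(axis=0)

for name, scaled in (("min-max", min_max(X)), ("z-score", z_score(X))):
    print(name, scaled[:, 1].round(3))
```

The first three min-max values of the outlier-laden feature collapse near zero, so a distance-based classifier would barely distinguish them; this is the kind of technique-dependent distortion the paper measures across 82 datasets.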